28 research outputs found

    Towards dense object tracking in a 2D honeybee hive

    Full text link
    From human crowds to cells in tissue, the detection and efficient tracking of multiple objects in dense configurations is an important and unsolved problem. In the past, limitations of image analysis have restricted studies of dense groups to tracking a single or subset of marked individuals, or to coarse-grained group-level dynamics, all of which yield incomplete information. Here, we combine convolutional neural networks (CNNs) with the model environment of a honeybee hive to automatically recognize all individuals in a dense group from raw image data. We create a new, adapted individual labeling scheme and use the segmentation architecture U-Net with a loss function dependent on both object identity and orientation. We additionally exploit temporal regularities of the video recording in a recurrent manner and achieve near human-level performance while reducing the network size by 94% compared to the original U-Net architecture. Given our novel application of CNNs, we generate extensive problem-specific image data in which labeled examples are produced through a custom interface with Amazon Mechanical Turk. This dataset contains over 375,000 labeled bee instances across 720 video frames at 2 FPS, representing an extensive resource for the development and testing of tracking methods. We correctly detect 96% of individuals with a location error of ~7% of a typical body dimension and an orientation error of 12 degrees, approximating the variability of human raters. Our results provide an important step towards efficient image-based dense object tracking by allowing for the accurate determination of object location and orientation across time-series image data within one network architecture. Comment: 15 pages, including supplementary figures; 1 supplemental movie available as an ancillary file.
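    The combined identity-and-orientation loss described above can be pictured with a short sketch. The following is a hypothetical PyTorch formulation, assuming the network emits per-pixel class logits plus a (cos, sin) orientation map and that the two terms are summed with a weight alpha; the paper's actual loss and output layout may differ.

```python
# Hypothetical sketch of a segmentation loss combining object identity and
# orientation terms. The output layout and the weight "alpha" are assumptions,
# not the paper's exact code.
import torch
import torch.nn.functional as F

def identity_orientation_loss(pred, target_class, target_angle, alpha=1.0):
    """pred: (B, C+2, H, W) network output; the first C channels are class
    logits, the last two channels predict (cos, sin) of per-pixel orientation.
    target_class: (B, H, W) integer labels; target_angle: (B, H, W) radians."""
    num_classes = pred.shape[1] - 2
    class_logits = pred[:, :num_classes]   # per-pixel identity scores
    orient = pred[:, num_classes:]         # predicted (cos, sin) maps

    # Standard per-pixel cross-entropy for object identity (bee vs. background)
    loss_id = F.cross_entropy(class_logits, target_class)

    # Orientation loss only on foreground pixels: compare the predicted vector
    # with the ground-truth orientation unit vector.
    fg = (target_class > 0).float()
    tgt_vec = torch.stack([torch.cos(target_angle), torch.sin(target_angle)], dim=1)
    loss_orient = ((orient - tgt_vec) ** 2).sum(dim=1)
    loss_orient = (loss_orient * fg).sum() / fg.sum().clamp(min=1.0)

    return loss_id + alpha * loss_orient
```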

    The Diversity of Coral Reefs: What Are We Missing?

    Get PDF
    Tropical reefs shelter one quarter to one third of all marine species, but one third of the coral species that construct reefs are now at risk of extinction. Because traditional methods for assessing reef diversity are extremely time-consuming, taxonomic expertise for many groups is lacking, and marine organisms are thought to be less vulnerable to extinction, most discussions of reef conservation focus on maintenance of ecosystem services rather than biodiversity loss. In this study involving the three major oceans with reef growth, we provide new biodiversity estimates based on quantitative sampling and DNA barcoding. We focus on crustaceans, which are the second most diverse group of marine metazoans. We show exceptionally high numbers of crustacean species associated with coral reefs relative to sampling effort (525 species from a combined, globally distributed sample area of 6.3 m²). The high prevalence of rare species (38% encountered only once), the low level of spatial overlap (81% found in only one locality), and the biogeographic patterns of diversity detected (Indo-West Pacific > Central Pacific > Caribbean) are consistent with results from traditional survey methods, making this approach a reliable and efficient method for assessing and monitoring biodiversity. The finding of such large numbers of species in a small total area suggests that coral reef diversity is seriously under-detected using traditional survey methods and, by implication, underestimated.
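    The summary statistics reported above (fraction of singletons, fraction of species restricted to one locality, richness per region) can be derived from a simple species-by-locality table. Below is an illustrative pandas sketch with made-up records; the column names and toy data are assumptions, not the study's dataset.

```python
# Illustrative sketch (not the authors' code) of computing diversity summaries
# from an occurrence table with assumed columns "species", "locality", "count".
import pandas as pd

records = pd.DataFrame({
    "species":  ["sp1", "sp1", "sp2", "sp3", "sp3", "sp3"],
    "locality": ["IWP", "CP",  "IWP", "CAR", "CAR", "CAR"],
    "count":    [1, 2, 1, 1, 3, 2],
})

# Fraction of species encountered only once (singletons) across all samples
per_species = records.groupby("species")["count"].sum()
frac_singletons = (per_species == 1).mean()

# Fraction of species found in only one locality (low spatial overlap)
localities_per_species = records.groupby("species")["locality"].nunique()
frac_single_locality = (localities_per_species == 1).mean()

# Species richness per region, to compare biogeographic patterns
richness_by_region = records.groupby("locality")["species"].nunique()

print(frac_singletons, frac_single_locality)
print(richness_by_region)
```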

    Markerless tracking of an entire honey bee colony

    Get PDF
    From cells in tissue, to bird flocks, to human crowds, living systems display a stunning variety of collective behaviors. Yet quantifying such phenomena first requires tracking a significant fraction of the group members in natural conditions, a substantial and ongoing challenge. We present a comprehensive, computational method for tracking an entire colony of the honey bee Apis mellifera using high-resolution video on a natural honeycomb background. We adapt a convolutional neural network (CNN) segmentation architecture to automatically identify bee and brood cell positions, body orientations and within-cell states. We achieve high accuracy (~10% body width error in position, ~10 degrees error in orientation, and true positive rate > 90%) and demonstrate months-long monitoring of sociometric colony fluctuations. These fluctuations include ~24 h cycles in the counted detections, negative correlation between bee and brood, and nightly enhancement of bees inside comb cells. We combine detected positions with visual features of organism-centered images to track individuals over time and through challenging occluding events, recovering ~79% of bee trajectories from five observation hives over 5 min timespans. The trajectories reveal important individual behaviors, including waggle dances and crawling inside comb cells. Our results provide opportunities for the quantitative study of collective bee behavior and for advancing tracking techniques of crowded systems.
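    The linking of per-frame detections into trajectories can be illustrated with a minimal assignment step. The sketch below pairs detections across consecutive frames using a cost that mixes spatial distance and appearance-feature distance, solved with the Hungarian algorithm; the function name, weights, and threshold are assumptions, and the paper's tracker additionally handles occlusions and within-cell states.

```python
# Minimal frame-to-frame linking sketch, assuming each detection has a pixel
# position and an appearance descriptor (e.g. a CNN embedding). Illustrative
# only; not the paper's actual tracking pipeline.
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev_pos, prev_feat, cur_pos, cur_feat,
                max_dist=30.0, w_appearance=10.0):
    """prev_pos/cur_pos: (N, 2)/(M, 2) pixel coordinates;
    prev_feat/cur_feat: (N, D)/(M, D) appearance descriptors."""
    # Pairwise spatial distance between all previous and current detections
    d_pos = np.linalg.norm(prev_pos[:, None, :] - cur_pos[None, :, :], axis=-1)
    # Pairwise appearance distance
    d_feat = np.linalg.norm(prev_feat[:, None, :] - cur_feat[None, :, :], axis=-1)

    cost = d_pos + w_appearance * d_feat
    rows, cols = linear_sum_assignment(cost)

    # Keep only plausible links; unmatched detections start or end trajectories
    links = [(i, j) for i, j in zip(rows, cols) if d_pos[i, j] <= max_dist]
    return links
```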

    WormPose: Image synthesis and convolutional networks for pose estimation in C. elegans

    Get PDF
    An important model system for understanding genes, neurons and behavior, the nematode worm C. elegans naturally moves through a variety of complex postures, for which estimation from video data is challenging. We introduce an open-source Python package, WormPose, for 2D pose estimation in C. elegans, including self-occluded, coiled shapes. We leverage advances in machine vision afforded from convolutional neural networks and introduce a synthetic yet realistic generative model for images of worm posture, thus avoiding the need for human-labeled training. WormPose is effective and adaptable for imaging conditions across worm tracking efforts. We quantify pose estimation using synthetic data as well as N2 and mutant worms in on-food conditions. We further demonstrate WormPose by analyzing long (∼ 10 hour), fast-sampled (∼ 30 Hz) recordings of on-food N2 worms to provide a posture-scale analysis of roaming/dwelling behaviors
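    The core idea of generating synthetic training images from a known posture can be sketched in a few lines. The toy renderer below draws a worm-like body from a centerline with OpenCV; it is an assumption-laden illustration (constant body width, no texture, background, or noise), not WormPose's actual generative model.

```python
# Toy "posture to image" renderer: draw a worm-like shape from centerline
# points. All parameter names and values here are illustrative assumptions.
import numpy as np
import cv2

def render_worm(skeleton_xy, width_px=6, image_size=(128, 128)):
    """skeleton_xy: (K, 2) array of centerline points in pixel coordinates."""
    img = np.zeros(image_size, dtype=np.uint8)
    pts = skeleton_xy.astype(np.int32).reshape(-1, 1, 2)
    # Draw the body as a thick polyline along the centerline; a real generator
    # would vary the width along the body and add texture and background.
    cv2.polylines(img, [pts], isClosed=False, color=255,
                  thickness=width_px, lineType=cv2.LINE_AA)
    return img

# Example: a coiled posture represented as a spiral of centerline points
t = np.linspace(0, 3 * np.pi, 50)
skeleton = np.stack([64 + 4 * t * np.cos(t), 64 + 4 * t * np.sin(t)], axis=1)
frame = render_worm(skeleton)
```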

    Author Correction: Markerless tracking of an entire honey bee colony

    No full text
    The original version of this Article omitted from the author list the fourth author Alexander S. Mikheyev, who is from the Ecology and Evolution Unit, OIST Graduate University, Okinawa, Japan, and the Research School of Biology, Australian National University, Canberra, Australia. The third author Yoann Portugal has the following additional affiliation: Ecology and Evolution Unit, OIST Graduate University, Okinawa, Japan. The fourth author Alexander S. Mikheyev and the fifth author Greg J. Stephens declare equal contributions. Consequently, the Acknowledgements, which formerly read “We thank Michael Iuzzolino, Dieu My thanh Nguyen, Orit Peleg, and Michael Smith for comments on the manuscript and code testing. This work was supported by the Okinawa Institute of Science and Technology Graduate University”, have been corrected to “We are grateful to Takahashi Ikemiya for maintaining the experimental bee colonies. We thank Michael Iuzzolino, Dieu My Thanh Nguyen, Orit Peleg, and Michael Smith for comments on the manuscript and code testing. This work was supported by the Okinawa Institute of Science and Technology Graduate University. Additional funding was provided by KAKENHI grants 16H06209 and 16KK0175 from the Japan Society for the Promotion of Science to AM”. Additionally, the Author Contributions, which formerly read “Y.P. performed the bee work and devised the imaging setup, L. H. devised the labeling tool, K.B. performed method development and data analysis, K.B. and G.S. designed the study and wrote the manuscript”, has been corrected to “Y.P. performed the bee work, Y.P. and A.M. devised the imaging setup, L.H. devised the labeling tool, K.B. performed method development and data analysis, K.B., A.M., and G.S. designed the study, K.B. and G.S. wrote the manuscript”. This has been corrected in both the PDF and HTML versions of the Article. The original version of the Supplementary information associated with this Article contained an error in the description of Supplementary Table 2, which incorrectly read “All imaging data in this study were collected in 2019”. The correct version states “2018” in place of “2019”. The HTML has been updated to include a corrected version of the Supplementary information